1 параллелизм на уровне задач
1) General subject: task-level parallelism (parallelism arising from the chosen way of organizing the parallel execution of several tasks or processes; in uniprocessor systems the tasks are allocated time slices of a single CPU)
2) Programming: task parallelism (see task-level parallelism)
Универсальный русско-английский словарь (Universal Russian-English Dictionary) > параллелизм на уровне задач
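The distinction is easiest to see in code. Below is a minimal illustrative sketch in Python (not from the dictionary; all function names are invented): task parallelism means running *different* tasks concurrently on the same input.

```python
# Task parallelism: two *different* tasks run concurrently.
# Illustrative sketch using only the standard library; names are invented.
from concurrent.futures import ThreadPoolExecutor

def word_count(text):
    """One task: count words."""
    return len(text.split())

def char_count(text):
    """A different task: count characters."""
    return len(text)

text = "task parallelism runs distinct tasks at the same time"
with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(word_count, text)   # task A on one worker
    f2 = pool.submit(char_count, text)   # task B on another worker
    words, chars = f1.result(), f2.result()

print(words, chars)  # 9 53
```

Each worker executes a different control flow, which is exactly what distinguishes this from the data-parallel entry below.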
2 параллелизм задач
Computers: task parallelism
3 параллелизм на уровне данных
1) General subject: data parallelism (a model and organization of parallel computation in a multiprocessor environment (e.g. SIMD) in which several elements of a data stream or array are processed simultaneously (i.e. …)
2) Programming: data-level parallelism (a single program instruction is applied simultaneously to many data elements. Syn: SIMD. Ant: task-level parallelism)
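A minimal Python sketch of the data-parallel model described above (illustrative only; `square` and `data` are invented names): the *same* operation is applied across many data elements, with the elements divided among workers.

```python
# Data parallelism: one operation applied to many data elements at once.
# Illustrative sketch using only the standard library; names are invented.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    """The single operation applied uniformly to every element."""
    return x * x

data = [1, 2, 3, 4, 5, 6, 7, 8]
with ThreadPoolExecutor(max_workers=4) as pool:
    # Each worker applies the same function to its share of the elements,
    # mirroring the SIMD-style model the entry describes.
    results = list(pool.map(square, data))

print(results)  # [1, 4, 9, 16, 25, 36, 49, 64]
```

Here the parallelism comes from partitioning the data, not from running distinct tasks.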
See also in other dictionaries:
Task parallelism — (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing execution processes (threads) … Wikipedia
Data parallelism — (also known as loop-level parallelism) is a form of parallelization of computing across multiple processors in parallel computing environments. Data parallelism focuses on distributing the data across different parallel computing nodes. It … Wikipedia
Explicit parallelism — In computer programming, explicit parallelism is the representation of concurrent computations by means of primitives in the form of special-purpose directives or function calls. Most parallel primitives are related to process synchronization, … Wikipedia
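The entry above can be sketched concretely: in explicit parallelism, the programmer creates threads and manages synchronization primitives directly, rather than relying on a compiler to infer parallelism. A minimal illustrative Python sketch (names are invented):

```python
# Explicit parallelism: concurrency is expressed directly with primitives
# (thread creation, locks, joins), not inferred by a compiler.
# Illustrative sketch; counter/increment names are invented.
import threading

counter = 0
lock = threading.Lock()  # explicit synchronization primitive

def increment(n):
    global counter
    for _ in range(n):
        with lock:       # programmer-managed mutual exclusion
            counter += 1

threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()            # explicit thread creation
for t in threads:
    t.join()             # explicit synchronization point

print(counter)  # 4000
```

Note that most of the primitives here (the lock, the joins) exist for synchronization, matching the entry's observation that most parallel primitives relate to it.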
Memory-level parallelism — or MLP is a term in computer architecture referring to the ability to have multiple memory operations, in particular cache misses or translation lookaside buffer misses, pending at the same time. In a single processor, MLP may be considered a … Wikipedia
Implicit parallelism — In computer science, implicit parallelism is a characteristic of a programming language that allows a compiler to automatically exploit the parallelism inherent in the computations expressed by some of the language's constructs. A pure implicitly … Wikipedia
OpenMP — an API for shared-memory parallel programming, developed and maintained by the OpenMP Architecture Review Board. … Wikipedia
Data Intensive Computing — is a class of parallel computing applications which use a data-parallel approach to process large volumes of data, typically terabytes or petabytes in size, commonly referred to as Big Data. Computing applications which devote most of their … Wikipedia
Comparison of MPI, OpenMP, and Stream Processing — MPI is a language-independent communications protocol used to program parallel computers. Both point-to-point and collective communication are supported. MPI is a message-passing application programmer interface, together with protocol and … Wikipedia
Parallel FX Library — (PFX) is a managed concurrency library being developed by a collaboration between Microsoft Research and the CLR team at Microsoft for inclusion with a future revision of the .NET Framework. It is composed of two parts: Parallel LINQ (PLINQ) and … Wikipedia
Chapel (programming language) — Chapel is a parallel programming language developed by Cray. It is being developed as part of the Cray Cascade project, a participant in DARPA's High Productivity Computing Systems (HPCS) program, which has the goal of increasing … Wikipedia